Large-margin Structured Learning for Link Ranking
Authors
Abstract
This work was supported by NSF grant CCF0937094 and IARPA via DoI/NBC contract number D12PC00337. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the U.S. Government. Learn more: http://psl.cs.umd.edu

fλ(Y, X) ≥ fλ(Ỹ, X) + L(Y, Ỹ),   ∀Ỹ ∈ Y
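The displayed inequality is the standard large-margin constraint: the score fλ of the true link structure Y must exceed the score of every alternative structure Ỹ by at least the structured loss L(Y, Ỹ). As a minimal sketch of how such constraints are typically folded into a trainable objective via the margin-rescaled structured hinge loss, the snippet below assumes a linear scoring function, a user-supplied joint feature map phi, and a Hamming-style loss over binary link indicators; none of these specifics come from the paper itself.

```python
import numpy as np

def structured_hinge_loss(w, phi, loss, X, Y_true, candidates):
    """Margin-rescaled structured hinge for one example.

    Encodes  f(Y, X) >= f(Y~, X) + L(Y, Y~)  for all Y~: the violation of
    the most-offending candidate structure is the training loss.

    w          -- weight vector
    phi        -- joint feature map: phi(Y, X) -> np.ndarray
    loss       -- structured loss: loss(Y_true, Y_cand) -> float
    candidates -- iterable of alternative structures Y~ (enumerable here)
    """
    score_true = w @ phi(Y_true, X)
    # Loss-augmented "inference": find the most violated constraint.
    worst = max(w @ phi(Y_cand, X) + loss(Y_true, Y_cand) - score_true
                for Y_cand in candidates)
    return max(0.0, worst)


# Toy illustration: structures are binary link-indicator vectors, the
# feature map sums per-link features over selected links, and the loss
# is Hamming distance between indicator vectors.
if __name__ == "__main__":
    phi = lambda Y, X: X @ Y
    hamming = lambda Y1, Y2: float(np.sum(Y1 != Y2))
    X = np.random.RandomState(0).randn(3, 4)        # 3 features x 4 candidate links
    Y_true = np.array([1, 0, 1, 0])
    candidates = [np.array(c) for c in [(1, 0, 1, 0), (0, 1, 0, 1), (1, 1, 0, 0)]]
    print(structured_hinge_loss(np.zeros(3), phi, hamming, X, Y_true, candidates))
```

In realistic link-ranking settings the maximization over candidates is carried out by loss-augmented inference rather than by enumerating structures as the toy example does.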
Similar Papers
Perceptron-like Algorithms and Generalization Bounds for Learning to Rank
Learning to rank is a supervised learning problem where the output space is the space of rankings but the supervision space is the space of relevance scores. We make theoretical contributions to the learning to rank problem both in the online and batch settings. First, we propose a perceptron-like algorithm for learning a ranking function in an online setting. Our algorithm is an extension of t...
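The snippet above is cut off before the algorithm itself is described, so the following is only a generic sketch of an online, perceptron-style pairwise update driven by graded relevance scores, not the authors' actual algorithm. The function name, feature matrix layout, and learning rate are assumptions for illustration.

```python
import numpy as np

def pairwise_perceptron_step(w, docs, relevance, lr=1.0):
    """One online update on a single query (illustrative, not the paper's method).

    docs      -- (n_docs, n_features) feature matrix for the query's documents
    relevance -- (n_docs,) graded relevance scores (the supervision)

    For every document pair whose relevance order is violated by the
    current scores, nudge w toward the feature difference.
    """
    scores = docs @ w
    for i in range(len(docs)):
        for j in range(len(docs)):
            if relevance[i] > relevance[j] and scores[i] <= scores[j]:
                w = w + lr * (docs[i] - docs[j])
    return w
```

Ranking a new query then amounts to sorting its documents by docs @ w.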
Perturbation based Large Margin Approach for Ranking
The use of the standard hinge loss for structured outputs, for the learning to rank problem, faces two main caveats: (a) the label space, the set of all possible permutations of items to be ranked, is too large, and also less amenable to the usual dynamic-programming based techniques used for structured outputs, and (b) the supervision or training data consists of instances with multiple labels...
Supervised Learning as Preference Optimization: A General Framework and its Applications
Supervised learning is characterized by a broad spectrum of learning problems, often involving structured predictions, including classification and regression problems, ranking-based predictions (label and instance ranking), and ordinal regression in its various forms. All these different learning problems are typically addressed by specific algorithmic solutions. In this paper, we show that t...
Simple Risk Bounds for Position-Sensitive Max-Margin Ranking Algorithms
Risk bounds for position-sensitive max-margin ranking algorithms can be derived straightforwardly from a structural result for Rademacher averages presented by [1]. We apply this result to pairwise and listwise hinge losses that are position-sensitive by virtue of rescaling the margin by a pairwise or listwise position-sensitive prediction loss. Similar bounds have recently been presented for probab...
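The rescaling described here can be made concrete with a rough sketch: a pairwise hinge loss whose required margin for each mis-ordered pair is scaled by a position-sensitive weight. The specific weight below (the difference of DCG-style discounts at the two items' predicted positions) and the brute-force pair loop are assumptions for illustration, not necessarily the construction analyzed in the paper.

```python
import numpy as np

def position_sensitive_pairwise_hinge(scores, relevance):
    """Pairwise hinge loss with a margin rescaled by a position-sensitive
    weight (illustrative choice: difference of 1/log2(1 + rank) discounts)."""
    scores = np.asarray(scores, dtype=float)
    relevance = np.asarray(relevance, dtype=float)

    order = np.argsort(-scores)                   # predicted ranking (best first)
    rank = np.empty_like(order)
    rank[order] = np.arange(1, scores.size + 1)   # 1-based predicted position of each item
    discount = 1.0 / np.log2(1.0 + rank)

    total = 0.0
    for i in range(scores.size):
        for j in range(scores.size):
            if relevance[i] > relevance[j]:
                margin = abs(discount[i] - discount[j])   # position-sensitive rescaling
                total += max(0.0, margin - (scores[i] - scores[j]))
    return total
```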
Large margin optimization of ranking measures
Most ranking algorithms, such as pairwise ranking, are based on the optimization of standard loss functions, but the quality measure used to evaluate web page rankers is often different. We present an algorithm that directly optimizes one of the popular measures, the Normalized Discounted Cumulative Gain. It is based on the framework of structured output learning, where in our case the input c...
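For reference, one common definition of the target measure, NDCG@k, computed from predicted scores and graded relevance labels; the exponential-gain form below is a standard convention, not necessarily the exact variant the paper optimizes.

```python
import numpy as np

def dcg_at_k(relevance_in_ranked_order, k):
    """DCG@k with (2^rel - 1) gains and 1/log2(rank + 1) discounts."""
    rel = np.asarray(relevance_in_ranked_order, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    return float(np.sum((2.0 ** rel - 1.0) * discounts))

def ndcg_at_k(scores, relevance, k=10):
    """NDCG@k of the ranking induced by `scores`, normalized by the ideal
    DCG obtained from ordering the items by their true relevance."""
    scores = np.asarray(scores, dtype=float)
    relevance = np.asarray(relevance, dtype=float)
    dcg = dcg_at_k(relevance[np.argsort(-scores)], k)
    idcg = dcg_at_k(relevance[np.argsort(-relevance)], k)
    return dcg / idcg if idcg > 0 else 0.0
```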
A Preference Optimization Based Unifying Framework for Supervised Learning Problems
Supervised learning is characterized by a broad spectrum of learning problems, often involving structured types of prediction, including classification, ranking-based predictions (label and instance ranking), and (ordinal) regression in its various forms. All these different learning problems are typically addressed by specific algorithmic solutions. In this chapter, we propose a general prefer...